18 research outputs found

    Prospects and problems in designing image oriented information systems

    There are slowly maturing and growing about us today a number of techniques which are likely to have a very significant effect upon the implementation of information systems in the near future. One of these techniques is pictorial data handling and interpretation, which is a subclass of the general area called pattern recognition. Pictorial data processing first became volumetrically significant in the case of photographic output of synchrotron bubble chambers which now deliver several million photographs per year. More recently, a surge of interest has developed in automatic interpretation of biological, medical, and weather satellite pictorial data. The automatic scanning of microscopic slides for the purpose of identifying certain morphological characters is an example of a rather complex task in the area of biological/medical laboratory automation. Some new viewpoints have begun to emerge from the experience of grappling with large volume pictorial data handling problems.

    An Approach to Artificial Nonsymbolic Cognition

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Joint Services Electronics Program / DAAB 07-67-C-0199. Office of Education / OE C-1-7-071213-455.

    Cumulative Learning Using Functionally Similar States

    In this paper we propose a Cumulative Learning System for artificial agents that uses the idea of Functional Similarity between states. The general idea of Cumulative Learning is to build a cognitive architecture for an artificial agent that 'lives' for a long time and solves many related tasks during its lifetime. Two states (or situations) are said to be functionally similar (FS) with respect to an action if the action induces the same change on both states. We define the notion of FS for Markov Environments, and then use it to develop a Predictive Model (PM) that, given the states and actions observed so far, predicts the next state when an action is taken in a novel state (a state never or rarely observed before) - i.e. the PM is a novel type of forward model. We also describe a planning mechanism for goal-directed MDPs with multiple goals that uses the PM to solve tasks more quickly using information from solutions to similar tasks solved previously by the agent. After establishing some necessary theoretical properties of both, we perform experiments that show the efficacy of our method. We also outline how the current system, which can actually be categorized as a Lifelong Learning system, may be extended to a complete cumulative learning system.
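    The FS idea described above can be sketched in a few lines. This is a minimal illustration assuming a toy deterministic grid world with (x, y) states and a coordinate delta as the "change" an action induces; the paper's actual setting (general Markov environments) is richer, and all names here are illustrative, not from the paper.

```python
# Sketch of Functional Similarity (FS): if an action induces the same change
# in all observed states, reuse that change to predict the outcome in a
# state never observed before.

def observed_delta(observations, action):
    """Change induced by `action`, read off observed (s, a, s') triples.
    Under the FS assumption, the change is the same for all FS states."""
    for s, a, s_next in observations:
        if a == action:
            return (s_next[0] - s[0], s_next[1] - s[1])
    return None  # action never observed

def predict_next(observations, novel_state, action):
    """Predictive Model (PM) sketch: apply the change seen in functionally
    similar states to a novel state."""
    d = observed_delta(observations, action)
    if d is None:
        return None
    return (novel_state[0] + d[0], novel_state[1] + d[1])

# Two observations of "right": both induce the change (+1, 0), so the
# observed states are FS with respect to "right".
obs = [((0, 0), "right", (1, 0)), ((3, 2), "right", (4, 2))]
print(predict_next(obs, (5, 5), "right"))  # (6, 5)
```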

    Transfer Learning using Kolmogorov Complexity: Basic Theory and Empirical Evaluations

    In transfer learning we aim to solve new problems more quickly by using information gained from solving related problems. Transfer learning has been successful in practice, and extensive PAC analysis of these methods has been developed. However, it is not yet clear how to define relatedness between tasks. This is considered a major problem as, aside from being conceptually troubling, it makes it unclear how much information to transfer and when and how to transfer it. In this paper we propose to measure the amount of information one task contains about another using conditional Kolmogorov complexity between the tasks. We show how existing theory neatly solves the problem of measuring relatedness and transferring the 'right' amount of information in sequential transfer learning in a Bayesian setting. The theory also suggests that, in a very formal and precise sense, no other transfer method can do much better than the Kolmogorov complexity theoretic transfer method, and that sequential transfer is always justified. We also develop a practical approximation to the method and use it to transfer information between 8 arbitrarily chosen databases from the UCI ML repository.
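    Kolmogorov complexity is uncomputable, so practical approximations of this kind typically substitute a real compressor. The sketch below illustrates that general idea with zlib, approximating K(x | y) by C(y + x) - C(y); this is a standard compression-based stand-in, not necessarily the paper's exact procedure, and the example tasks are made up.

```python
# Compression-based approximation of conditional Kolmogorov complexity:
# K(x | y) is approximated by how much less x costs to encode once the
# compressor has already seen y.
import zlib

def C(data: bytes) -> int:
    """Compressed length of `data` (zlib at maximum compression level)."""
    return len(zlib.compress(data, 9))

def cond_complexity(x: bytes, y: bytes) -> int:
    """Approximate K(x | y): extra bytes needed to encode x given y."""
    return C(y + x) - C(y)

task_a = b"0123456789" * 50   # source task
task_b = b"0123456789" * 50   # same regularity: highly related to task_a
task_c = bytes(range(256)) * 2  # different regularity: less related

# A related task should be cheap to describe given the source task.
print(cond_complexity(task_b, task_a) < cond_complexity(task_c, task_a))
```

In a transfer setting, a smaller `cond_complexity(new_task, old_task)` would suggest transferring more information from the old task's solution.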

    Cumulative Learning: Towards Designing Cognitive Architectures for Artificial Agents that Have a Lifetime

    Cognitive architectures should be designed with learning performance as a central goal. A critical feature of intelligence is the ability to apply the knowledge learned in one context to a new context. A cognitive agent is expected to have a lifetime, in which it has to learn to solve several different types of tasks in its environment. In such a situation, the agent should become increasingly better adapted to its environment. This means that its learning performance on each new task should improve as it is able to transfer knowledge learned in previous tasks to the solution of the new task. We call this ability cumulative learning. Cumulative learning thus refers to the accumulation of learned knowledge over a lifetime, and its application to the learning of new tasks. We believe that creating agents that exhibit sophisticated, long-term, adaptive behavior is going to require this kind of approach.

    Abstract

    Previous work in knowledge transfer in machine learning has been restricted to tasks in a single domain. However, evidence from psychology and neuroscience suggests that humans are capable of transferring knowledge across domains. We present here a novel learning method, based on neuroevolution, for transferring knowledge across domains. We use many-layered, sparsely-connected neural networks in order to learn a structural representation of tasks. Then we mine frequent sub-graphs in order to discover sub-networks that are useful for multiple tasks. These sub-networks are then used as primitives for speeding up the learning of subsequent related tasks, which may be in different domains.
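    The reuse step described above can be illustrated very simply. The sketch below stands in for frequent sub-graph mining by counting edges shared across several tasks' learned networks; real sub-graph mining is considerably more involved, and every name here is hypothetical rather than taken from the paper.

```python
# Sketch: treat each task's learned network as a set of (source, target)
# edges, and keep the sub-network that recurs across tasks as a reusable
# primitive for seeding a new task's network.
from collections import Counter

def shared_subnetwork(task_networks, min_support=2):
    """Return edges appearing in at least `min_support` task networks."""
    counts = Counter(edge for net in task_networks for edge in net)
    return {edge for edge, c in counts.items() if c >= min_support}

# Three toy networks learned on three tasks (possibly different domains).
net1 = {("in0", "h1"), ("h1", "out"), ("in1", "h2")}
net2 = {("in0", "h1"), ("h1", "out"), ("in2", "h3")}
net3 = {("in0", "h1"), ("h4", "out")}

primitive = shared_subnetwork([net1, net2, net3], min_support=2)
print(sorted(primitive))  # [('h1', 'out'), ('in0', 'h1')]
```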